Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM
Sparsity has been widely recognized as crucial for efficient optimization in
graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect
the set of incorporated measurements, many methods for sparsification have been
proposed in hopes of reducing computation. These methods often focus narrowly
on reducing edge count without regard for structure at a global level. Such
structurally-naive techniques can fail to produce significant computational
savings, even after aggressive pruning. In contrast, simple heuristics such as
measurement decimation and keyframing are known empirically to produce
significant computation reductions. To demonstrate why, we propose a
quantitative metric called elimination complexity (EC) that bridges the
existing analytic gap between graph structure and computation. EC quantifies
the complexity of the primary computational bottleneck: the factorization step
of a Gauss-Newton iteration. Using this metric, we show rigorously that
decimation and keyframing impose favorable global structures and therefore
achieve computation reductions that scale polynomially with the pruning rate.
We additionally present numerical results
showing EC provides a good approximation of computation in both batch and
incremental (iSAM2) optimization and demonstrate that pruning methods promoting
globally-efficient structure outperform those that do not.
Comment: Pre-print accepted to ICRA 201
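The fill-in of a sparse factorization is the quantity that elimination complexity tracks. As a rough illustration (not the paper's code; the matrices and all names below are invented for the sketch), SciPy's sparse LU can compare the fill produced by a chain-structured information matrix, of the kind decimation and keyframing preserve, against the same chain with many long-range edges:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fill_in(A):
    """Nonzeros in the sparse LU factors of A: a proxy for the
    factorization cost that elimination complexity quantifies."""
    lu = spla.splu(sp.csc_matrix(A))
    return lu.L.nnz + lu.U.nnz

n = 200
# Chain-structured information matrix (the global structure that
# decimation/keyframing preserve): tridiagonal, minimal fill.
chain = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)],
                 [-1, 0, 1], format="csc")

# The same chain plus many random long-range edges: still sparse,
# but a structurally unfavorable graph for elimination.
rng = np.random.default_rng(0)
rows, cols = rng.integers(0, n, 300), rng.integers(0, n, 300)
loops = sp.coo_matrix((-0.1 * np.ones(300), (rows, cols)), shape=(n, n))
loopy = sp.csc_matrix(chain + loops + loops.T + 10.0 * sp.eye(n))
```

Even at similar edge counts, the chain factors with far fewer nonzeros, which is the structural effect the abstract attributes to decimation and keyframing.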
Iterative solutions to the steady state density matrix for optomechanical systems
We present a sparse matrix permutation from graph theory that gives stable
incomplete Lower-Upper (LU) preconditioners necessary for iterative solutions
to the steady state density matrix for quantum optomechanical systems. This
reordering is efficient, adding little overhead to the computation, and results
in a marked reduction in both memory and runtime requirements compared to other
solution methods, with performance gains increasing with system size. Memory
use and runtime can each be traded off via the preconditioner accuracy and
solution tolerance. This reordering improves the condition number of the approximate
inverse, and is the only method found to be stable at large Hilbert space
dimensions. This allows for steady state solutions to otherwise intractable
quantum optomechanical systems.
Comment: 10 pages, 5 figures
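A minimal sketch of the workflow described above, assuming SciPy: an incomplete LU preconditioner built with a fill-reducing (COLAMD) column permutation, used inside an iterative Krylov solver. The matrix here is a random shifted stand-in, not an actual Liouvillian:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Random sparse stand-in for the (non-Hermitian) steady-state operator;
# the diagonal shift keeps it safely nonsingular.
rng = np.random.default_rng(1)
n = 500
A = sp.random(n, n, density=0.01, format="csc", random_state=rng)
A = (A + 5.0 * sp.eye(n)).tocsc()
b = np.ones(n)

# Incomplete LU with a fill-reducing column ordering (COLAMD); drop_tol
# and fill_factor trade preconditioner accuracy against memory.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10, permc_spec="COLAMD")
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
```

The drop tolerance and the solver's convergence tolerance are the two knobs the abstract mentions for tuning memory versus runtime.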
Block SOR for Kronecker structured representations
The Kronecker structure of a hierarchical Markovian model (HMM) induces nested block
partitionings in the transition matrix of its underlying Markov chain. This paper shows how
sparse real Schur factors of certain diagonal blocks of a given partitioning induced by the
Kronecker structure can be constructed from smaller component matrices and their real Schur
factors. Furthermore, it shows how the column approximate minimum degree (COLAMD)
ordering algorithm can be used to reduce fill-in of the remaining diagonal blocks that are
sparse LU factorized. Combining these ideas, the paper proposes three-level block successive
over-relaxation (BSOR) as a competitive steady state solver for HMMs. Finally, on a set of
numerical experiments it demonstrates how these ideas reduce storage required by the factors
of the diagonal blocks and improve solution time compared to an all LU factorization implementation
of the BSOR solver.
© 2004 Elsevier Inc. All rights reserved.
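A block SOR sweep with pre-factorized diagonal blocks can be sketched as follows. This is a simplified one-level version assuming SciPy; the paper's three-level scheme over Kronecker-induced partitionings, with real Schur factors for some blocks, is more elaborate:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_sor(A, b, block_size, omega=1.0, iters=200):
    """Block SOR: diagonal blocks are LU-factorized once up front
    (the factors whose fill-in COLAMD reduces) and reused every sweep."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    starts = list(range(0, n, block_size))
    lus = [spla.splu(sp.csc_matrix(A[s:s + block_size, s:s + block_size]))
           for s in starts]
    x = np.zeros(n)
    for _ in range(iters):
        for k, s in enumerate(starts):
            e = min(s + block_size, n)
            r = b[s:e] - A[s:e, :] @ x        # residual with current iterate
            x[s:e] += omega * lus[k].solve(r)
    return x

# Small diagonally dominant test system.
n = 64
A = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1])
b = np.ones(n)
x = block_sor(A, b, block_size=8)
```

Factorizing each diagonal block once and reusing the factors across sweeps is what makes the per-iteration cost, and the storage the abstract discusses, depend on the fill-in of those factors.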
Parallel Optimizations for the Hierarchical Poincaré-Steklov Scheme (HPS)
Parallel optimizations for the 2D Hierarchical Poincaré-Steklov (HPS)
discretization scheme are described. HPS is a multi-domain spectral collocation
scheme that allows for combining very high order discretizations with direct
solvers, making the discretization powerful in resolving highly oscillatory
solutions to high accuracy. HPS can be viewed as a domain decomposition scheme
where the domains are connected directly through the use of a sparse direct
solver. This manuscript describes optimizations of HPS that are simple to
implement, and that leverage batched linear algebra on modern hybrid
architectures to improve the practical speed of the solver. In particular, the
manuscript demonstrates that the traditionally high cost of performing local
static condensation for discretizations involving very high local order can
be reduced dramatically.
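The batched-linear-algebra idea can be illustrated with NumPy's batched solve: many independent small dense local systems, such as per-leaf static condensations in HPS, are stacked and solved in one call instead of a Python loop. The sizes and names below are hypothetical:

```python
import numpy as np

# HPS-style workload: many independent small dense local systems (one
# static condensation per leaf). Stacking them lets a single batched
# call replace a loop over leaves.
rng = np.random.default_rng(0)
n_leaves, p = 1000, 16
G = rng.standard_normal((n_leaves, p, p))
A = G @ G.transpose(0, 2, 1) + p * np.eye(p)   # batch of SPD local operators
B = rng.standard_normal((n_leaves, p, 3))      # batch of right-hand sides

X = np.linalg.solve(A, B)                      # one batched solve, all leaves
```

On hybrid architectures the same pattern maps onto batched BLAS/LAPACK kernels, which is the source of the practical speedups the manuscript targets.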
Monte Carlo on manifolds in high dimensions
We introduce an efficient numerical implementation of a Markov Chain Monte
Carlo method to sample a probability distribution on a manifold (introduced
theoretically in Zappa, Holmes-Cerfon, Goodman (2018)), where the manifold is
defined by the level set of constraint functions, and the probability
distribution may involve the pseudodeterminant of the Jacobian of the
constraints, as arises in physical sampling problems. The algorithm is easy to
implement and scales well to problems with thousands of dimensions and with
complex sets of constraints provided their Jacobian retains sparsity. The
algorithm uses direct linear algebra and requires a single matrix factorization
per proposal point, which enhances its efficiency over previously proposed
methods but becomes the computational bottleneck of the algorithm in high
dimensions. We test the algorithm on several examples inspired by soft-matter
physics and materials science to study its complexity and properties.
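The single-factorization-per-proposal structure can be sketched as a Newton projection onto the constraint level set that factorizes J J^T once and reuses it for every step. This is a toy illustration with SciPy's sparse LU and a sphere constraint (the full sampler also handles tangent-space moves and Metropolis acceptance):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def project_to_manifold(x, constraint, jacobian, tol=1e-12, max_iter=50):
    """Newton projection onto the level set c(y) = 0 along range(J^T),
    reusing a single sparse factorization of J J^T for every step."""
    J = sp.csc_matrix(jacobian(x))
    lu = spla.splu((J @ J.T).tocsc())   # the one factorization per proposal
    y = x.copy()
    for _ in range(max_iter):
        c = constraint(y)
        if np.linalg.norm(c) < tol:
            return y
        y = y - J.T @ lu.solve(c)
    return None                          # projection failed to converge

# Toy constraint: the unit sphere in R^3, c(y) = |y|^2 - 1.
c = lambda y: np.array([y @ y - 1.0])
Jc = lambda y: 2.0 * y[None, :]

y = project_to_manifold(np.array([1.3, -0.2, 0.4]), c, Jc)
```

When the constraint Jacobian stays sparse, the factorization of J J^T remains cheap even in thousands of dimensions, which is the scaling property the abstract highlights.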
Decompositional analysis of Kronecker structured Markov chains
This contribution proposes a decompositional iterative method with low memory requirements for the steady-state analysis of Kronecker structured Markov chains. The Markovian system is formed by a composition of subsystems using the Kronecker sum operator for local transitions and the Kronecker product operator for synchronized transitions. Even though the interactions among subsystems, which are captured by synchronized transitions, need not be weak, numerical experiments indicate that the solver benefits considerably from weak interactions among subsystems, and is to be recommended specifically in this case. © 2008, Kent State University
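A minimal illustration of the Kronecker composition, assuming SciPy and two invented two-state subsystems: the Kronecker sum combines local generators into the product-chain generator, and the stationary vector solves pi Q = 0 under normalization (synchronized transitions, omitted here, would add Kronecker product terms):

```python
import numpy as np
import scipy.sparse as sp

# Two invented 2-state subsystem generators (local transitions only).
Q1 = sp.csr_matrix(np.array([[-1.0, 1.0], [2.0, -2.0]]))
Q2 = sp.csr_matrix(np.array([[-3.0, 3.0], [0.5, -0.5]]))

# The Kronecker sum composes the local generators into the generator
# of the product chain without storing it densely until needed.
Q = sp.kronsum(Q1, Q2).toarray()

# Stationary vector: pi Q = 0 with the normalization sum(pi) = 1.
n = Q.shape[0]
M = np.vstack([Q.T, np.ones((1, n))])
rhs = np.zeros(n + 1)
rhs[-1] = 1.0
pi, *_ = np.linalg.lstsq(M, rhs, rcond=None)
```

Keeping Q in Kronecker form rather than assembled is what gives such solvers their low memory footprint; the dense `toarray` here is only for the tiny example.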
Block SOR preconditioned projection methods for Kronecker structured Markovian representations
Kronecker structured representations are used to cope with the state space explosion problem in Markovian modeling and analysis. Currently, an open research problem is that of devising strong preconditioners to be used with projection methods for the computation of the stationary vector of Markov chains (MCs) underlying such representations. This paper proposes a block successive overrelaxation (BSOR) preconditioner for hierarchical Markovian models (HMMs) that are composed of multiple low-level models and a high-level model that defines the interaction among low-level models. The Kronecker structure of an HMM yields nested block partitionings in its underlying continuous-time MC which may be used in the BSOR preconditioner. The computation of the BSOR preconditioned residual in each iteration of a preconditioned projection method becomes the problem of solving multiple nonsingular linear systems whose coefficient matrices are the diagonal blocks of the chosen partitioning. The proposed BSOR preconditioner solves these systems using sparse LU or real Schur factors of diagonal blocks. The fill-in of sparse LU factorized diagonal blocks is reduced using the column approximate minimum degree (COLAMD) ordering. A set of numerical experiments is presented to show the merits of the proposed BSOR preconditioner. © 2005 Society for Industrial and Applied Mathematics
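The role of such a preconditioner inside a projection method can be sketched as a one-sweep block-SOR operator, with diagonal blocks LU-factorized up front and the whole thing wrapped as a LinearOperator for GMRES. This is a simplified stand-in for the HMM setting, assuming SciPy:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def bsor_preconditioner(A, block_size, omega=1.0):
    """Apply M^{-1} for M = D/omega + L in the block splitting
    A = L + D + U, i.e. one forward block-SOR sweep. The diagonal
    blocks are LU-factorized once and reused on every application."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    starts = list(range(0, n, block_size))
    lus = [spla.splu(sp.csc_matrix(A[s:s + block_size, s:s + block_size]))
           for s in starts]

    def apply(r):
        z = np.zeros_like(r)
        for k, s in enumerate(starts):
            e = min(s + block_size, n)
            # entries of z at indices >= s are still zero, so this picks
            # up exactly the strictly-lower block contribution L z
            z[s:e] = omega * lus[k].solve(r[s:e] - A[s:e, :] @ z)
        return z

    return spla.LinearOperator((n, n), matvec=apply)

n = 64
A = sp.diags([-np.ones(n - 1), 4.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1])
b = np.ones(n)
x, info = spla.gmres(A, b, M=bsor_preconditioner(A, block_size=8))
```

Each application of the preconditioner is exactly the "multiple nonsingular linear systems over the diagonal blocks" computation the abstract describes, which is why the quality of those block factors drives the overall method.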
Stochastic Bundle Adjustment for Efficient and Scalable 3D Reconstruction
Current bundle adjustment solvers such as the Levenberg-Marquardt (LM)
algorithm are limited by the bottleneck in solving the Reduced Camera System
(RCS), whose dimension is proportional to the number of cameras. When the problem is
scaled up, this step is neither efficient in computation nor manageable for a
single compute node. In this work, we propose a stochastic bundle adjustment
algorithm which seeks to decompose the RCS approximately inside the LM
iterations to improve the efficiency and scalability. It first reformulates the
quadratic programming problem of an LM iteration, based on a clustering of the
visibility graph, by introducing equality constraints across clusters. Then,
we propose to relax it into a chance-constrained problem and solve it via a
sampled convex program. The relaxation is intended to eliminate the
interdependence between clusters embodied by the constraints, so that a large
RCS can be decomposed into independent linear sub-problems. Numerical
experiments on unordered Internet image sets and sequential SLAM image sets, as
well as distributed experiments on large-scale datasets, have demonstrated the
high efficiency and scalability of the proposed approach. Codes are released at
https://github.com/zlthinker/STBA.
Comment: Accepted by ECCV 202
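The Reduced Camera System itself is the Schur complement that eliminates point parameters from the damped normal equations. A small dense sketch with hypothetical sizes (one scalar unknown per camera and point; real solvers exploit the block-diagonal structure of the point block V):

```python
import numpy as np

# Toy damped normal equations H = [[U, W], [W^T, V]]: U couples camera
# parameters, V point parameters, W the camera-point cross term.
rng = np.random.default_rng(2)
n_cam, n_pts = 6, 40
J = rng.standard_normal((3 * n_pts, n_cam + n_pts))  # stand-in Jacobian
H = J.T @ J + 1e-3 * np.eye(n_cam + n_pts)           # LM damping term
g = rng.standard_normal(n_cam + n_pts)

U, W, V = H[:n_cam, :n_cam], H[:n_cam, n_cam:], H[n_cam:, n_cam:]

# Reduced Camera System: Schur complement eliminating the point block.
S = U - W @ np.linalg.solve(V, W.T)
rhs = g[:n_cam] - W @ np.linalg.solve(V, g[n_cam:])
x_cam = np.linalg.solve(S, rhs)
x_pts = np.linalg.solve(V, g[n_cam:] - W.T @ x_cam)
```

Because S is dense over all cameras, its solve dominates at scale; decomposing it into per-cluster sub-problems, as the abstract proposes, is what removes that bottleneck.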